23 research outputs found

    An intelligent Othello player combining machine learning and game specific heuristics

    Artificial intelligence applications in board games have been around since the 1950s, and computer programs have been developed for games including Checkers, Chess, and Go with varying results. Although general game-tree search algorithms have been designed to work on games meeting certain requirements (e.g. zero-sum, two-player, perfect or imperfect information), the best results come from combining these with specific knowledge of game strategies. In this MS thesis, we present an intelligent Othello game player that combines game-specific heuristics with machine learning techniques in move selection. Five game-specific heuristics, namely corner detection, killer move detection, blocking, blacklisting, and pattern recognition, have been proposed. Some of these heuristics can be generalized to fit other games by removing the Othello-specific components and replacing them with specific knowledge of the target game. For machine learning techniques, the standard Minimax algorithm along with a custom variation is used as a base. Genetic algorithms and neural networks are applied to learn the static evaluation function. The five game-specific techniques (or a subset of them) are executed first, and if no move is found, Minimax game-tree search is performed. All techniques and several subsets of them have been tested against three deterministic agents, one non-deterministic agent, and three human players of varying skill levels. The results show that the combined Othello player performs better in general. We present the study results on the basis of four main metrics: performance (percentage of games won), speed, predictability of opponent, and usage situation.
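
    The control flow described above (heuristics first, minimax as fallback) can be sketched as follows. This is a minimal illustration, not the thesis code: the `game` interface (legal_moves, apply, evaluate, opponent) and the corner-detection stub are assumptions introduced here for clarity.

```python
def corner_detection(game, state, player):
    """Example heuristic stub: take a corner immediately if one is legal."""
    corners = {(0, 0), (0, 7), (7, 0), (7, 7)}
    for move in game.legal_moves(state, player):
        if move in corners:
            return move
    return None


def select_move(game, state, player, heuristics, depth=4):
    """Run game-specific heuristics in priority order; fall back to minimax."""
    for heuristic in heuristics:
        move = heuristic(game, state, player)
        if move is not None:
            return move

    def minimax(s, d, current, maximizing):
        moves = game.legal_moves(s, current)
        if d == 0 or not moves:
            return game.evaluate(s, player)  # static evaluation function
        values = [minimax(game.apply(s, m, current), d - 1,
                          game.opponent(current), not maximizing)
                  for m in moves]
        return max(values) if maximizing else min(values)

    # No heuristic fired: choose the move with the best minimax value.
    return max(game.legal_moves(state, player),
               key=lambda m: minimax(game.apply(state, m, player), depth - 1,
                                     game.opponent(player), False),
               default=None)
```

    In the thesis, the static evaluation function plugged in here is what the genetic algorithms and neural networks learn.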

    Real-time Shadows for Gigapixel Displacement Maps

    Shadows convey helpful information in scenes. From a scientific visualization standpoint, they add data without unnecessary clutter; in video games they add realism and depth. In common graphics pipelines, shadows are difficult to achieve because geometric primitives are rendered independently and in parallel: objects require knowledge of each other, and therefore multiple render passes are needed to collect the necessary data, a collection that comes with its own set of trade-offs. Our research involves adding shadows to a lunar rendering framework developed by Dr. Robert Kooima. The NASA-collected data contains a multi-gigapixel displacement map describing the lunar topography. This map does not fit entirely into main memory, so out-of-core paging is used to achieve real-time speeds. Current shadow techniques do not attempt to generate occluder data on such a scale, and we have therefore developed a novel approach to fit this situation. Using a chain of pre-processing steps, we analyze the structure of the displacement map and calculate horizon lines at each vertex. This information is saved into several images and used to generate shadows in a single pass, maintaining real-time speeds. The algorithm is even capable of generating soft shadows without extra information or loss of speed. We compare our algorithm with common approaches in the field as well as two forms of ground truth: one from ray tracing and the other from the gigapixel lunar texture data, which shows real shadows at the time it was collected.
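
    The core idea of horizon-based shadowing can be sketched as below: precompute, for each texel and a small set of azimuthal directions, the steepest elevation angle to surrounding terrain, then at render time compare the sun's elevation against the stored horizon angle. This is a simplified single-threaded illustration under assumed encodings (a dense angle array rather than the paper's image-packed representation, and heights in the same units as horizontal distance).

```python
import numpy as np


def precompute_horizon(height, d_azimuths=8, max_steps=64):
    """March outward in each azimuth; record the steepest elevation angle.

    `height` is a 2-D displacement map; units of height and horizontal
    step are assumed consistent for this sketch.
    """
    H, W = height.shape
    horizon = np.zeros((H, W, d_azimuths), dtype=np.float32)
    for d in range(d_azimuths):
        theta = 2 * np.pi * d / d_azimuths
        dx, dy = np.cos(theta), np.sin(theta)
        for y in range(H):
            for x in range(W):
                best = 0.0
                for step in range(1, max_steps):
                    sx = int(round(x + dx * step))
                    sy = int(round(y + dy * step))
                    if not (0 <= sx < W and 0 <= sy < H):
                        break
                    rise = height[sy, sx] - height[y, x]
                    best = max(best, np.arctan2(rise, float(step)))
                horizon[y, x, d] = best
    return horizon


def shadow_factor(horizon, x, y, sun_azimuth, sun_elevation, softness=0.05):
    """Return 0 for fully shadowed, 1 for fully lit; smoothstep gives a
    soft penumbra without any extra stored data."""
    D = horizon.shape[2]
    d = int(round(sun_azimuth / (2 * np.pi) * D)) % D
    t = np.clip((sun_elevation - horizon[y, x, d]) / softness, 0.0, 1.0)
    return t * t * (3 - 2 * t)
```

    The expensive loop lives entirely in pre-processing; the per-pixel runtime test is a single lookup and comparison, which is what makes the single-pass, real-time claim plausible.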

    Medial longitudinal arch development of school children: The College of Podiatry Annual Conference 2015: meeting abstracts

    Background: Foot structure is often classified as flat, neutral, or high-arched based on the variability of the medial longitudinal arch (MLA). To date, the literature provides contrasting evidence on the age at which MLA development stabilises in children. The influence of footwear on MLA development is also unknown.
    Aim: This study aims to (i) clarify whether the MLA is still changing in children from age 7 to 9 years and (ii) explore the relationship between footwear usage and MLA development, using a longitudinal approach.
    Methods: We evaluated the MLA of 111 healthy school children [age = 6.9 (0.3) years] using three parameters [arch index (AI), midfoot peak pressure (PP), and maximum force (MF: % of body weight)] extracted from dynamic foot loading measurements at baseline, 10-month, and 22-month follow-up. Information on the type of footwear worn was collected using a survey question. Linear mixed modelling was used to test for differences in the MLA over time.
    Results: No significant changes in any MLA parameter were observed over time [AI: P = .15; PP: P = .84; MF: P = .91]. When gender was considered, the AI of boys decreased with age [P = .02]. Boys also displayed a flatter MLA than girls at age 6.9 years [AI: mean difference = 0.02 (0.01, 0.04); P = .02]. At baseline, subjects who wore closed-toe shoes displayed the lowest MLA overall [AI/PP/MF: P < .05]. Subjects who used slippers when commencing footwear use experienced higher PP than those who wore sandals [mean difference = 31.60 (1.44, 61.75) kPa; post-hoc P = .04].
    Discussion and conclusion: Our findings suggest that the MLA of children remains stable from 7 to 9 years of age, while gender and the type of footwear worn during childhood may influence MLA development. Clinicians may choose to commence therapy when a child presents with painful flexible flat foot at age 7 years, and may discourage younger children from wearing slippers when they commence using footwear.
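
    A hedged sketch of the longitudinal analysis named in the Methods, using statsmodels' mixed-effects API. The data file and column names (subject, time, gender, AI) are illustrative assumptions, not the study's actual variables.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per child per visit
# (baseline, 10-month, 22-month), with AI as the outcome.
df = pd.read_csv("mla_followup.csv")

# Random intercept per child; fixed effects for visit time, gender,
# and their interaction (to test whether boys' AI changes differently).
model = smf.mixedlm("AI ~ time * gender", data=df, groups=df["subject"])
result = model.fit()
print(result.summary())  # inspect P values for the time and gender terms
```

    A random intercept per child is the usual minimal specification for repeated measures like these; the same formula would be refit with PP or MF as the outcome.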

    Guidelines for the use and interpretation of assays for monitoring autophagy (3rd edition)

    In 2008 we published the first set of guidelines for standardizing research in autophagy. Since then, research on this topic has continued to accelerate, and many new scientists have entered the field. Our knowledge base and relevant new technologies have also been expanding. Accordingly, it is important to update these guidelines for monitoring autophagy in different organisms. Various reviews have described the range of assays that have been used for this purpose. Nevertheless, there continues to be confusion regarding acceptable methods to measure autophagy, especially in multicellular eukaryotes. For example, a key point that needs to be emphasized is that there is a difference between measurements that monitor the numbers or volume of autophagic elements (e.g., autophagosomes or autolysosomes) at any stage of the autophagic process versus those that measure flux through the autophagy pathway (i.e., the complete process including the amount and rate of cargo sequestered and degraded). In particular, a block in macroautophagy that results in autophagosome accumulation must be differentiated from stimuli that increase autophagic activity, defined as increased autophagy induction coupled with increased delivery to, and degradation within, lysosomes (in most higher eukaryotes and some protists such as Dictyostelium) or the vacuole (in plants and fungi). In other words, it is especially important that investigators new to the field understand that the appearance of more autophagosomes does not necessarily equate with more autophagy. In fact, in many cases, autophagosomes accumulate because of a block in trafficking to lysosomes without a concomitant change in autophagosome biogenesis, whereas an increase in autolysosomes may reflect a reduction in degradative activity. It is worth emphasizing here that lysosomal digestion is a stage of autophagy and evaluating its competence is a crucial part of the evaluation of autophagic flux, or complete autophagy. Here, we present a set of guidelines for the selection and interpretation of methods for use by investigators who aim to examine macroautophagy and related processes, as well as for reviewers who need to provide realistic and reasonable critiques of papers that are focused on these processes. These guidelines are not meant to be a formulaic set of rules, because the appropriate assays depend in part on the question being asked and the system being used. In addition, we emphasize that no individual assay is guaranteed to be the most appropriate one in every situation, and we strongly recommend the use of multiple assays to monitor autophagy. Along these lines, because of the potential for pleiotropic effects due to blocking autophagy through genetic manipulation, it is imperative to delete or knock down more than one autophagy-related gene. In addition, some individual Atg proteins, or groups of proteins, are involved in other cellular pathways, so not all Atg proteins can be used as specific markers for an autophagic process. In these guidelines, we consider these various methods of assessing autophagy and what information can, or cannot, be obtained from them. Finally, by discussing the merits and limits of particular autophagy assays, we hope to encourage technical innovation in the field.
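
    The flux-versus-steady-state distinction can be made concrete with the widely used LC3-II turnover assay discussed in these guidelines: flux is inferred from the inhibitor-dependent accumulation of LC3-II, not from its absolute level. The numbers below are invented purely to illustrate the arithmetic.

```python
# Illustrative, invented densitometry values for LC3-II, measured with and
# without a lysosomal inhibitor (e.g. bafilomycin A1).
lc3_ii_untreated = 1.0   # relative LC3-II level, no inhibitor
lc3_ii_inhibited = 2.5   # relative LC3-II level, lysosomal degradation blocked

# The difference estimates degradative flux. A high steady-state LC3-II level
# alone cannot distinguish increased induction from a block in trafficking.
flux = lc3_ii_inhibited - lc3_ii_untreated
print(f"Estimated autophagic flux (relative units): {flux:.2f}")
```

    A large inhibitor-induced increase indicates material was actively flowing to the lysosome; little or no increase, despite many visible autophagosomes, indicates a downstream block rather than more autophagy.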

    Reducing the environmental impact of surgery on a global scale: systematic review and co-prioritization with healthcare workers in 132 countries

    Background: Healthcare cannot achieve net-zero carbon without addressing operating theatres. The aim of this study was to prioritize feasible interventions to reduce the environmental impact of operating theatres.
    Methods: This study adopted a four-phase Delphi consensus co-prioritization methodology. In phase 1, a systematic review of published interventions and a global consultation of perioperative healthcare professionals were used to longlist interventions. In phase 2, iterative thematic analysis consolidated comparable interventions into a shortlist. In phase 3, the shortlist was co-prioritized based on patient and clinician views on acceptability, feasibility, and safety. In phase 4, ranked lists of interventions were presented by their relevance to high-income countries and low–middle-income countries.
    Results: In phase 1, 43 interventions were identified, which had low uptake in practice according to 3042 professionals globally. In phase 2, a shortlist of 15 intervention domains was generated. In phase 3, interventions were deemed acceptable by more than 90 per cent of patients, except for reducing general anaesthesia (84 per cent) and re-sterilization of ‘single-use’ consumables (86 per cent). In phase 4, the top three shortlisted interventions for high-income countries were: introducing recycling; reducing use of anaesthetic gases; and appropriate clinical waste processing. The top three shortlisted interventions for low–middle-income countries were: introducing reusable surgical devices; reducing use of consumables; and reducing the use of general anaesthesia.
    Conclusion: This is a step toward environmentally sustainable operating environments, with actionable interventions applicable to both high-income and low–middle-income countries.

    Global burden of bacterial antimicrobial resistance in 2019: a systematic analysis


    Expanded encyclopaedias of DNA elements in the human and mouse genomes

    The human and mouse genomes contain instructions that specify RNAs and proteins and govern the timing, magnitude, and cellular context of their production. To better delineate these elements, phase III of the Encyclopedia of DNA Elements (ENCODE) Project has expanded analysis of the cell and tissue repertoires of RNA transcription, chromatin structure and modification, DNA methylation, chromatin looping, and occupancy by transcription factors and RNA-binding proteins. Here we summarize these efforts, which have produced 5,992 new experimental datasets, including systematic determinations across mouse fetal development. All data are available through the ENCODE data portal (https://www.encodeproject.org), including phase II ENCODE (ref. 1) and Roadmap Epigenomics (ref. 2) data. We have developed a registry of 926,535 human and 339,815 mouse candidate cis-regulatory elements, covering 7.9% and 3.4% of their respective genomes, by integrating selected datatypes associated with gene regulation, and constructed a web-based server (SCREEN; http://screen.encodeproject.org) to provide flexible, user-defined access to this resource. Collectively, the ENCODE data and registry provide an expansive resource for the scientific community to build a better understanding of the organization and function of the human and mouse genomes.
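
    The ENCODE data portal mentioned above serves JSON for programmatic access. A minimal sketch of a portal query follows; the search endpoint and format=json behaviour are part of the portal's documented REST interface, while the specific search term and printed fields are illustrative choices.

```python
import requests

# Query the ENCODE portal's search endpoint for a handful of experiments.
url = "https://www.encodeproject.org/search/"
params = {
    "type": "Experiment",      # object type to search
    "searchTerm": "ChIP-seq",  # illustrative free-text query
    "format": "json",          # ask for machine-readable results
    "limit": "5",
}
response = requests.get(url, params=params,
                        headers={"Accept": "application/json"})
response.raise_for_status()

# Matching objects are returned under the "@graph" key.
for hit in response.json().get("@graph", []):
    print(hit.get("accession"), hit.get("assay_title"))
```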

    Perspectives on ENCODE

    The Encyclopedia of DNA Elements (ENCODE) Project launched in 2003 with the long-term goal of developing a comprehensive map of functional elements in the human genome. These included genes, biochemical regions associated with gene regulation (for example, transcription factor binding sites, open chromatin, and histone marks), and transcript isoforms. The marks serve as sites for candidate cis-regulatory elements (cCREs) that may serve functional roles in regulating gene expression (ref. 1). The project has been extended to model organisms, particularly the mouse. In the third phase of ENCODE, nearly a million and more than 300,000 cCRE annotations have been generated for human and mouse, respectively, and these have provided a valuable resource for the scientific community.